DFT Takes on Test Cost in Final Combat

By Ron Wilson
Integrated System Design
Posted 10/03/01, 11:20:02 AM EDT

Test cost is becoming a major issue in chip design. That should surprise no reader of ISD. What might be surprising is the severity of the problem. Not only are test costs threatening to become the largest single component of chip cost in some system-on-chip designs, but some designs are already being judged untestable. And process advances promise to bring about the failure of some design-for-test techniques that are just emerging now.

The industry is striking back against these threats on several fronts. First, scan, only a few years ago a controversial technique, has become nearly universal. Second, driven by the realization that there is neither time nor, often, accessibility for functional testers to do a complete job, built-in self-test (BIST) techniques of various kinds are moving into the mainstream. And third, enabled by the growth in BIST, a new generation of low-cost testers is headed for the market, promising to slash capital expenditures for volume testing while improving fault coverage.

But even taken together, will these measures be enough? Many in the industry say that they will not. In all probability, not just design-for-test but architecture-for-test will grow to become as profound a problem for design teams as functional verification and timing closure are today.

Starting with scan
Scan chain insertion has been the first line of defense in test planning for some time. Scan chains typically go into the design at gate level and, with the exception of a block of scan control logic, are transparent to the RTL models. Once inserted, they provide a means of isolating blocks of logic, driving the inputs to a known state, clocking the block and capturing the resulting outputs. If done with cleverness, scan can achieve very high fault coverage, at least for stuck-at faults. And, critically important later on, scan requires only serial inputs and outputs on the die; often a four-wire interface is sufficient.
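The shift-in, capture, shift-out cycle described above can be sketched in a few lines. This is a toy illustration only, with a made-up 4-bit combinational block; real scan chains thread thousands of flip-flops and are driven by dedicated test clocks.

```python
def scan_test(logic, pattern):
    """Serially load `pattern`, capture the block's response, shift it out."""
    chain = [0] * len(pattern)
    # Shift in: one bit per test clock arrives through the serial scan-in pin.
    for bit in pattern:
        chain = chain[1:] + [bit]
    # Capture: one functional clock latches the combinational outputs
    # back into the same scan flip-flops.
    chain = logic(chain)
    # Shift out: the captured response leaves one bit per clock on scan-out
    # (here we simply return it).
    return chain

# A hypothetical block under test: each output is the AND of two
# neighboring inputs. Not taken from the article; for illustration only.
block = lambda ins: [ins[i] & ins[(i + 1) % len(ins)] for i in range(len(ins))]

print(scan_test(block, [1, 1, 0, 1]))  # -> [1, 0, 0, 1]
```

Note that the chip needs only the serial scan-in and scan-out pins plus clock and control, which is why a four-wire interface can suffice.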

But scan leaves a number of problems to the student. The most obvious is the choice of patterns to present to the inputs of the block under test. Implicit in this problem is a major strategic decision, said Mentor Graphics DFT product manager Greg Aldrich. Originally, scan chains were used to present functional test vectors to the block, on the assumption that if the block behaved properly it wasn't broken. But with growing complexity, it is becoming clear that functional patterns are both inefficient and often insufficient. Patterns need to be structural; that is, derived from the topology of the block rather than its intended behavior. "The transition from functional to structural testing is the big issue right now," Aldrich said. "Even designs using structural scan testing will often revert to functional vectors for at-speed test. That is going to have to change, and change will impact how blocks are designed as well as how vectors are chosen."
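The distinction can be made concrete with the textbook stuck-at model: a structural pattern is derived from the netlist to expose a specific fault site, rather than to exercise intended behavior. A heavily simplified sketch (the netlist and fault name are invented for illustration, and real ATPG tools do not search exhaustively):

```python
from itertools import product

def netlist(a, b, c, fault=None):
    """y = (a AND b) OR c, with an optional stuck-at fault on internal net 'ab'."""
    ab = a & b
    if fault == "ab/0":
        ab = 0                      # net 'ab' stuck at logic 0
    return ab | c

# Derive a test from topology: find every input pattern for which the
# fault changes the observable output.
tests = [(a, b, c) for a, b, c in product((0, 1), repeat=3)
         if netlist(a, b, c) != netlist(a, b, c, fault="ab/0")]

print(tests)  # -> [(1, 1, 0)]
```

Only one of the eight patterns detects this fault: a=b=1 activates the fault site, and c=0 keeps the OR gate from masking it. A functional vector set might never happen to apply that combination, which is why structural coverage of a complex block cannot be assumed from functional tests.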

Another problem with scan is the sheer volume of data that has to be pumped in and out of the serial test interface. One approach to this challenge is simple: Compress the patterns, expand them on-chip before routing them to the scan chains, and then compress the output before sending it off-chip. Mentor has just announced a package of RTL and supporting software for doing just this. The RTL block performs compression and expansion using a proprietary flow-through algorithm that requires only about 20 gates per scan chain, said product marketing manager David Stannard, and can achieve about a 10:1 compression ratio.
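Mentor's flow-through algorithm is proprietary, but the economics of the idea can be shown with a deliberately crude stand-in: scan data is highly repetitive (long runs of filler and don't-care bits), so even run-length coding shrinks it dramatically, and the decompressor is trivial enough to live on-chip.

```python
from itertools import groupby

def compress(bits):
    """Run-length encode a string of scan bits (tester-side)."""
    return [(b, len(list(g))) for b, g in groupby(bits)]

def expand(runs):
    """Regenerate the full pattern from the runs (the on-chip side)."""
    return "".join(b * n for b, n in runs)

pattern = "0000000011110000000000001111111100000000"
runs = compress(pattern)
assert expand(runs) == pattern      # lossless round trip
print(len(pattern), "bits ->", len(runs), "runs")  # -> 40 bits -> 5 runs
```

This is only a sketch of the principle; production scan compression uses structures closer to linear decompressors feeding many chains in parallel, which is how figures like 10:1 are reached without per-pattern bookkeeping.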

BIST to the front lines
A more sophisticated approach to this problem, but one that demands more of the design team, is to include circuitry in each block to generate the test patterns and check the results locally; in other words, BIST. BIST has been expected for on-chip memory structures for some time now. But as blocks get more complex and faster, BIST is becoming mandatory for logic blocks as well.
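The classic textbook arrangement for logic BIST (not any particular vendor's product) is a linear-feedback shift register generating pseudo-random patterns on-chip, with responses compacted into a signature so that only one word, not every response, need be compared off-chip:

```python
def lfsr_patterns(seed, taps, width, count):
    """Yield `count` pseudo-random patterns from a Fibonacci LFSR."""
    state = seed
    for _ in range(count):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tap bits for feedback
        state = ((state << 1) | fb) & ((1 << width) - 1)

def signature(responses, width=16):
    """Compact responses into one word (a simple MISR-style rotate-and-XOR)."""
    sig = 0
    for r in responses:
        sig = (((sig << 1) | (sig >> (width - 1))) & ((1 << width) - 1)) ^ r
    return sig

# Hypothetical circuit under test, chosen only for illustration.
block = lambda x: x ^ (x >> 1)

pats = list(lfsr_patterns(seed=0b1011, taps=(3, 2), width=4, count=15))
good = signature(block(p) for p in pats)

# A stuck-at fault corrupts at least one response, so the signature shifts.
bad = signature((block(p) | 0b0001) for p in pats)   # output bit 0 stuck at 1
print(hex(good), hex(bad), good != bad)
```

The tester's job then shrinks to starting the sequence and comparing one signature against a known-good value, which is exactly the "command and control role" described below.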

The transition would seem like a no-brainer: a small amount of additional logic for a huge increase in test speed. But early on it wasn't so, said SynTest Technologies Inc. president and CEO L.T. Wang. "The internal frequencies of blocks are too high for external testers, and the complexity is too great to route all the necessary signals out," Wang said. "These factors made people try BIST. But early BIST techniques imposed timing constraints that many designers found impossible. There were technology issues that led many to believe that BIST was inherently a poor option. Fortunately, more modern techniques have addressed these problems."

Fortunately, indeed, because traditional techniques are about to become hopeless, said LogicVision vice president Mukesh Mowji. "Current methods just aren't going to work for 10 million gate designs," Mowji insisted. "State-of-the-market testers are already costing $4 [million] to $6 million each, and they are falling behind on-chip clock rates. You have to segment the test task so that the chip is handling the high-complexity and high-speed tasks, and the tester is taking more of a command and control role."

That's great for logic blocks, or at least some logic blocks. Vendors admit that strategies for asynchronous circuits are not well-developed yet. More of an issue are things like clocks, power grids and, everyone's favorite, analog blocks.

Most vendors say that clock, power and specialized I/O cells will continue to require an external tester. Some fast I/Os and PLLs, however, can be characterized by using clever digital techniques, Mowji said.

But analog is another matter. One vendor, Fluence Technology Inc., has developed a BIST technique for PLLs, DACs and some other common blocks by extracting voltage histograms from the output of the analog block. The technique, related to signature analysis in digital testing, bets that the histogram from a malfunctioning circuit will be different from that of a correct circuit. The bet is demonstrably a good one. Just as important, it allows BIST of analog blocks to take commands and report results using an ordinary scan interface, so no analog capability is required of the tester. And since the histogram-forming circuitry is on-chip, the faster the chip the faster the data capture. Process technology can't outrun the BIST circuitry, said product marketing manager Jon Turino.
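The principle behind the histogram bet can be sketched digitally (this is an illustration of the idea, not Fluence's implementation; the clipping fault, bin count and tolerance are invented): sample the block's output, bin the samples into a voltage histogram, and flag any die whose histogram shape strays from a known-good reference.

```python
import math

def histogram(samples, bins=8, lo=-1.0, hi=1.0):
    """Bin voltage samples into a fixed-range histogram."""
    counts = [0] * bins
    for v in samples:
        i = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
        counts[i] += 1
    return counts

def differs(h_ref, h_meas, tol):
    """Flag the block if any bin deviates from the reference beyond `tol`."""
    return any(abs(a - b) > tol for a, b in zip(h_ref, h_meas))

n = 1024
good = [math.sin(2 * math.pi * k / 64) for k in range(n)]    # clean sine output
clipped = [max(-0.5, min(0.5, v)) for v in good]             # fault: clipping

h_ref = histogram(good)
print(differs(h_ref, histogram(good), tol=16))     # -> False (matches reference)
print(differs(h_ref, histogram(clipped), tol=16))  # -> True (shape has shifted)
```

Because only bin counts cross the scan interface, no analog waveform ever has to reach the tester, which is the point of the technique.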

The biggest problem cited by BIST vendors is not technology but design team attitudes. BIST, even when it imposes minimal new design rules at the register-transfer level, does impose some new ways of thinking. And it can't be shoved off on a gate-level designer to take care of off-line, the way scan insertion is often done. BIST structures have to go in at RTL time. Getting design managers to accept responsibility for test cost and to consider new approaches has been a major effort, and it is not yet complete by any means.

Changes ahead
But the rewards in test cost are about to get bigger. Responding to the spiraling cost of test iron and the spread of BIST techniques, the tester industry itself is changing. Both established production tester companies such as Schlumberger Ltd. and startups, such as Innovis and Nextest Systems Corp., are working on a new kind of tester that assumes the existence of extensive BIST.

The new machines will provide little more than power, clock and scan-chain connections, Synopsys Inc.'s DFT manager, David Hsu, said. But tester vendors say that even more changes are in the works.

"We are looking at a generation of testers that could sell for under $500,000 instead of the current $3 million and up," said Schlumberger strategic marketing manager Rudy Garcia. But Garcia sees another key change as well. With reduced tester functionality, a sharp reduction in the number of contacts the tester must touch on each die and the opportunity for parallel test of multiple dice, the center of gravity for test is moving from packaged dice to wafer sort.

A BIST-ready tester can probe scan-chain contacts, clock and supply lines on a large number of dice at once, launch multiple BIST sequences on each die, and accomplish much of the production test job at the wafer level. The savings are obvious.

Yet problems remain. One is how to analyze the data. Traditional scan and BIST techniques collect output patterns and either compare them against reference patterns or use them to compile diagnostic data. That's fine for stuck-at faults. But increasingly, as geometries get finer, faults are not so obvious.

"At 0.1 micron, our problems really increase," Garcia warned. "Bridging defects get much more common. And bridge defects as insignificant at 100 kohms can appear as a delay fault in at-speed testing. The old stuck-at and wire-AND fault models are simply inadequate. And that's not to even mention ac coupling faults from other sources than bridges."

Data Schlumberger has collected using Sematech reference designs indicates that the most powerful weapon against deep-submicron faults will be Iddq. In this technique, a very carefully chosen set of patterns is presented to the inputs of a block, and Idd is measured and compared against a reference. The technique is splendid for uncovering a variety of faults.

But, it requires detailed enough circuit knowledge to understand the effect of failures on supply current. And, it assumes that Idd is small enough that a small change is measurable. "Leakage current is driving Idd much higher," Garcia warned, "and that's making us look for a needle in a haystack. At DAC this year, designers were reporting SoCs with static current greater than dynamic current."
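Garcia's needle-in-a-haystack point is easy to put in numbers. All figures below are invented for illustration, not measured data: a bridging defect that once stood out starkly against microamp quiescent currents vanishes inside the normal spread of a leaky deep-submicron process.

```python
def iddq_pass(measured_ua, reference_ua, variation_ua):
    """Pass a die whose quiescent current is within normal spread of reference."""
    return measured_ua <= reference_ua + variation_ua

defect_ua = 25                      # extra current drawn through a bridge defect

# Older process: microamp-level leakage, tight spread -> defect caught.
print(iddq_pass(2 + defect_ua, reference_ua=2, variation_ua=1))        # -> False

# Leaky deep-submicron process: huge static current, wide spread -> defect missed.
print(iddq_pass(20_000 + defect_ua, reference_ua=20_000,
                variation_ua=2_000))                                   # -> True
```

The same 25-microamp defect fails the first screen and sails through the second, which is why rising leakage threatens Iddq just as it becomes the most attractive weapon against deep-submicron faults.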

So what to do?
There won't be any magic bullets. What will certainly emerge, though, is a greater consciousness of test strategy among chip architects and designers, and probably a new category of front-end testability design specialists who will engage with the design during requirements definition. JTAG will have come a long way.


http://www.isdmag.com


© 2001 CMP Media LLC.
10/1/01, Issue # 13148, page 36.


 
